Three principles, seven fields, and a prompt library that makes your requirements consumable by every AI tool in the pipeline — without rewriting them for each one.
Three principles that separate requirements AI can act on from requirements AI will misinterpret.
If a requirement relies on a human to "just know" context that isn't written down, it's not AI-ready. AI tools can't read minds or absorb organizational history through osmosis. If the context isn't in the document, it isn't available to the tool — or the developer reading it six months from now.
If it describes a quality — "user-friendly," "intuitive," "performant" — without defining observable behavior, it's not AI-ready. Describe what happens: what the system displays, calculates, validates, or prevents. "Fast" means nothing to a code generator. "Responds within 500ms" means everything.
If it bundles five workflows into one paragraph, it's not AI-ready. AI code generation tools work best with single-responsibility requirements. A requirement that covers enrollment tracking, audit logging, and export formatting in one story will produce code that conflates all three — difficult to test and impossible to iterate on.
The difference these principles create isn't subtle. Here's the same requirement written both ways:
This is aspirational and implicit. "Securely" requires the reader to know what security means in a SOC 2 Type II and PCI-DSS regulated financial environment — and AI tools do not. An AI code generator handed this requirement will produce something that feels secure and is not.
This is behavioral, explicit, and decomposed. Every clause is directly actionable by Copilot and Claude. The audit logger, the encryption layer, and the log sanitizer are three separately testable requirements — all in one sentence.
Seven fields. Each one feeds a different part of the AI toolchain.
As a [specific role + context], I need to [action] so that [measurable outcome]. The role must be specific enough that it defines constraints — "Loan Officer reviewing a multi-branch commercial lending portfolio" generates very different requirements than "user."
Given / When / Then — covering the happy path, boundary conditions, and error states. Testable means a developer can answer "pass" or "fail" without interpretation. If it requires judgment to evaluate, it needs to be rewritten.
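"Pass or fail without interpretation" means the criterion can be translated mechanically into an automated check. As a sketch, here is a hypothetical Given/When/Then criterion turned into one (the loan amounts, the approval limit, and the `reviewApproval` function are all illustrative names, not from any actual requirement):

```typescript
// Hypothetical criterion: "Given a loan amount above the officer's approval
// limit, When the officer submits an approval, Then the system rejects it
// and flags the application for escalation."

interface ApprovalDecision {
  approved: boolean;
  escalated: boolean;
}

function reviewApproval(loanAmount: number, approvalLimit: number): ApprovalDecision {
  // Then-clause: amounts over the limit are rejected and escalated.
  if (loanAmount > approvalLimit) {
    return { approved: false, escalated: true };
  }
  return { approved: true, escalated: false };
}

// Each assertion is a Then clause made executable: pass/fail, no judgment.
console.assert(reviewApproval(600_000, 500_000).escalated === true);
console.assert(reviewApproval(250_000, 500_000).approved === true);
```

If a criterion resists this translation, that is usually the sign it still "requires judgment to evaluate."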
Inputs and outputs: field names, data types, validation rules, sources, and calculated fields with their formulas. Copilot generates data models and validation logic from this field — without it, Copilot guesses at types and rules, and gets them wrong in ways that are expensive to fix in code review.
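A populated Data Context field maps almost one-to-one onto a typed model plus a validation function. A minimal sketch of what that derivation might look like — the field names, ID format, and validation ranges below are hypothetical examples, not the actual LOAN-2024-Q3 spec:

```typescript
// Each line of the Data Context becomes a typed field with its rule attached.
interface LoanApplication {
  applicationId: string;   // source: origination system, format "LOAN-YYYY-NNNN"
  principal: number;       // validation: > 0 and <= 10,000,000
  termMonths: number;      // validation: integer, 12 to 360
  monthlyPayment?: number; // calculated field: amortization formula, not user input
}

// Returns an empty array when the application is valid.
function validateLoanApplication(app: LoanApplication): string[] {
  const errors: string[] = [];
  if (!/^LOAN-\d{4}-\d{4}$/.test(app.applicationId)) {
    errors.push("applicationId must match LOAN-YYYY-NNNN");
  }
  if (!(app.principal > 0 && app.principal <= 10_000_000)) {
    errors.push("principal must be between 0 and 10,000,000");
  }
  if (!Number.isInteger(app.termMonths) || app.termMonths < 12 || app.termMonths > 360) {
    errors.push("termMonths must be an integer between 12 and 360");
  }
  return errors;
}
```

When the requirement omits a rule, the tool still emits code like this — it just invents the range.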
Layout, interactions, and every named state: loading, empty, success, error. "A sortable table" is not a behavioral description. "A table that re-sorts within 500ms on column header click without a full page reload" is. Figma Make and Claude Design produce usable starting points from the second kind.
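The behavioral version of "sortable" pins down observable details an implementer would otherwise guess at: a new ordering per click, both directions, no reload. A sketch of that behavior in isolation, with illustrative column names:

```typescript
type Row = Record<string, string | number>;

// "Re-sorts on column header click" made explicit: returns a newly ordered
// array without mutating the input, so no page state is rebuilt.
function sortRows(rows: Row[], column: string, direction: "asc" | "desc"): Row[] {
  const sorted = [...rows].sort((a, b) => {
    const av = a[column], bv = b[column];
    const cmp = typeof av === "number" && typeof bv === "number"
      ? av - bv                              // numeric columns compare numerically
      : String(av).localeCompare(String(bv)); // everything else compares as text
    return direction === "asc" ? cmp : -cmp;
  });
  return sorted;
}
```

The 500ms budget and the no-reload constraint are exactly the kind of detail "a sortable table" leaves to chance.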
Boundary conditions, timeout scenarios, concurrent users, and the data states that most stakeholders don't think about until QA finds them. Document these explicitly — AI test generation tools produce tests for exactly what you write here and nothing more.
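Documented boundaries translate directly into the tests AI tooling can generate. A sketch, assuming a hypothetical rule that principal is valid on the inclusive range 1 to 10,000,000 — the rule and function name are illustrative:

```typescript
function isValidPrincipal(amount: number): boolean {
  return Number.isFinite(amount) && amount >= 1 && amount <= 10_000_000;
}

// Boundary tests written from the documented edge cases, not the happy path.
console.assert(isValidPrincipal(1) === true);           // lower bound, inclusive
console.assert(isValidPrincipal(0) === false);          // just below lower bound
console.assert(isValidPrincipal(10_000_000) === true);  // upper bound, inclusive
console.assert(isValidPrincipal(10_000_001) === false); // just above upper bound
console.assert(isValidPrincipal(NaN) === false);        // malformed data state
```

None of the last four checks exist unless the requirement names the boundaries — which is the point of this field.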
SOC 2 Type II controls, PII handling rules, audit trail requirements, data retention policies, and accessibility standards (WCAG 2.1 AA at minimum). AI tools will not infer regulatory requirements. If it isn't in the requirement, it won't be in the code.
Upstream systems providing data, downstream consumers reading the output, APIs, shared data contracts, and feature flags. Claude used for architecture analysis treats this field as the map of the system — missing entries mean generated components that conflict with real integrations at the worst possible time.
If a story contains multiple "and" clauses, split it before you hand it to AI tooling. One behavior per requirement produces cleaner code generation, cleaner test coverage, and cleaner traceability.
Each template field has a direct, traceable connection to a specific AI tool failure mode.
| Template Field | AI Tool It Feeds | What Happens Without It |
|---|---|---|
| User Story + Acceptance Criteria | Copilot, Claude (code gen + test gen) | AI generates code that doesn't match intended behavior. Tests become impossible to auto-generate because there's no behavioral specification to derive them from. |
| Data Context | Copilot (data models, validation) | Copilot guesses at data types and gets them wrong. Developers manually translate field names and validation rules into code — the exact repetitive work AI should be handling. |
| UI/UX Behavioral Description | Figma Make, Claude Design | "A form" produces a generic form. Detailed behavioral descriptions — states, interactions, column behavior, sort order — produce usable starting points for design that require iteration, not reconstruction. |
| Edge Cases & Error States | Claude (test generation) | AI test generation tools derive test scenarios from what you've written. No edge cases means no edge case tests. QA finds them instead, in staging, at the end of the sprint. |
| Compliance & Regulatory Notes | Copilot, Claude (code gen) | AI tools will not infer SOC 2 Type II requirements, audit trail obligations, or PII handling rules. If AES-256 encryption and export audit logging are not specified, they will not appear in generated code. |
| Dependencies & Integrations | Claude (architecture analysis) | AI generates components in isolation that conflict with existing API contracts, miss integration points, or duplicate data that already lives upstream — technical debt introduced before the first commit. |
In every case, a missing field doesn't cause the AI tool to fail visibly — it causes it to succeed silently at the wrong thing. The output looks plausible. It compiles. It's wrong. The cost of that wrongness accumulates fastest when the requirement gets to Copilot, because code that looks correct is harder to audit than a design that looks off.
The vocabulary you use determines whether AI tools can act on your requirements or have to interpret them.
Before locking a requirement, paste it into Claude with this prompt: "Review this requirement and flag any subjective adjectives, ambiguous pronouns, implied knowledge, or compound statements that should be split into separate requirements." It takes 30 seconds and catches the issues that become expensive when they reach the development sprint.
What a complete, AI-ready requirement looks like in practice — not abbreviated, not idealized.
This is the Loan Application Review Table requirement for portfolio LOAN-2024-Q3, written with all seven fields populated. This is the version you hand to Copilot for data model generation, Claude for test case derivation, and Figma Make for initial layout. No rewriting required per tool — the same document feeds all three.
Hand the Data Context to Copilot and ask it to generate a TypeScript interface and validation function — done in under two minutes. Paste the UI/UX section into Figma Make and ask for a layout starting point. Paste Acceptance Criteria plus Edge Cases into Claude and ask for a complete test suite. Same document, three tools, no rewriting. That's what AI-ready means.
Six ready-to-use prompts that cover the full requirements lifecycle — from intake through ADO formatting.
Here are my notes from a stakeholder meeting about [topic]. Extract all requirements as user stories with acceptance criteria using this template: [paste template]. Flag any ambiguities and list follow-up questions that need to be answered before development can begin.
Review this requirement for AI-readiness. Identify ambiguities, missing edge cases, implicit assumptions, and any areas where an AI code generation tool would need to guess at the intended behavior. Suggest specific improvements for each issue you find.
This requirement is too broad for a single sprint story. Break it into smaller, independently deliverable user stories that each describe a single user action. Each story should be completable in one sprint and testable in isolation from the others.
Given this requirement and acceptance criteria, generate a comprehensive set of test scenarios including: happy path, boundary cases, error states, and edge cases. For each scenario, include the Given/When/Then structure and the expected pass/fail condition.
Format these requirements as Azure DevOps work items with title, description, acceptance criteria, and relevant tags. Also produce a parallel version formatted for Helix ALM import. Flag any fields that require manual input before import (assigned to, sprint, etc.).
I've attached a Teams meeting transcript (.vtt) and our requirements template. Populate the template from the transcript. Focus on functional requirements, acceptance criteria, and action items. For anything unclear in the transcript, list it as an open question rather than guessing at the intent.